37 research outputs found

    Can we use Google Scholar to identify highly-cited documents?

    Get PDF
    The main objective of this paper is to empirically test whether the identification of highly-cited documents through Google Scholar is feasible and reliable. To this end, we carried out a longitudinal analysis (1950 to 2013), running a generic query (filtered only by year of publication) to minimise the effects of academic search engine optimisation. This gave us a final sample of 64,000 documents (1,000 per year). The strong correlation between a document's citations and its position in the search results (r = -0.67) led us to conclude that Google Scholar is able to identify highly-cited papers effectively. This, combined with Google Scholar's unique coverage (no restrictions on document type or source), makes the academic search engine an invaluable tool for bibliometric research aimed at identifying the most influential scientific documents. We find evidence, however, that Google Scholar ranks documents whose language (or geographical web domain) matches the user's interface language higher than could be expected from their citations alone. Nonetheless, this language effect and other factors related to Google Scholar's operation, namely the identification of document versions and of publication dates, have only an incidental impact: they do not compromise Google Scholar's ability to identify highly-cited papers.
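
    As an illustration of the rank-versus-citations check described in this abstract, the following minimal Python sketch computes the correlation between a document's position in a result list and its citation count. The (rank, citations) pairs are invented and the use of scipy is an assumption; the paper reports r = -0.67 on its real sample.

    from scipy.stats import pearsonr, spearmanr

    # Hypothetical (rank, citations) pairs; in the study these would be
    # harvested from Google Scholar result pages for a given year.
    results = [(1, 5400), (2, 3100), (3, 2900), (4, 850), (5, 790),
               (6, 410), (7, 390), (8, 120), (9, 95), (10, 60)]

    ranks = [rank for rank, _ in results]
    citations = [cites for _, cites in results]

    # A strongly negative coefficient means highly-cited documents tend to
    # sit near the top of the result list.
    r, _ = pearsonr(ranks, citations)
    rho, _ = spearmanr(ranks, citations)
    print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")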

    Google Scholar Metrics evolution: an analysis according to languages

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-013-1164-8. In November 2012 the Google Scholar Metrics (GSM) journal rankings were updated, making it possible to compare the bibliometric indicators in the ten languages indexed, and their stability, with the April 2012 version. The h-index and h-5 median of 1,000 journals were analysed, comparing their averages, maximum and minimum values and the correlation coefficient within rankings. The bibliometric figures grew significantly. In just seven and a half months the h-index of the journals increased by 15% and the median h-index by 17%. This growth was observed for all the bibliometric indicators analysed and for practically every journal. However, we found significant differences in growth rates depending on the language in which the journal is published. Moreover, the journal rankings seem to be stable between April and November, reinforcing the credibility of the data held by Google Scholar and the reliability of the GSM journal rankings, despite the uncontrolled growth of Google Scholar. Based on the findings of this study we suggest, firstly, that Google should update its rankings at least semi-annually and, secondly, that the results should be displayed in each ranking proportionally to the number of journals indexed per language.
    Orduña Malea, E.; Delgado López-Cózar, E. (2014). Google Scholar Metrics evolution: an analysis according to languages. Scientometrics, 98(3), 2353-2367. doi:10.1007/s11192-013-1164-8
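
    As a side note on the indicators compared in this study, the short Python sketch below computes a journal's h-index and the median citation count of the articles in its h-core, using invented citation counts. It only illustrates the definitions; it is not the authors' own code.

    from statistics import median

    def h_index(citation_counts):
        # Largest h such that at least h articles have at least h citations each.
        ranked = sorted(citation_counts, reverse=True)
        return sum(1 for position, cites in enumerate(ranked, start=1) if cites >= position)

    def h_core_median(citation_counts):
        # Median citations of the h articles that form the h-core.
        ranked = sorted(citation_counts, reverse=True)
        h = h_index(ranked)
        return median(ranked[:h]) if h else 0

    journal_citations = [48, 33, 30, 27, 21, 19, 14, 9, 6, 3, 1, 0]  # made-up data
    print(h_index(journal_citations), h_core_median(journal_citations))  # 8, 24.0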

    Nature's Top 100 Re-Revisited

    Full text link
    "This is the peer reviewed version of the following article: Martín-Martín, A., Ayllon, J. M., López-Cózar, E. D., & Orduna-Malea, E. (2015). Nature's top 100 Re-revisited. JASIST, 66(12), 2714., which has been published in final form at http://doi.org/10.1002/asi.23570. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving."To mark the 50th anniversary of the Science Citation Index, Nature published a list of the 100 most-cited papers of all time. It also included an alternative ranking from data provided by Google Scholar, which, as this letter illustrates, contains certain inconsistencies. This does not, however, diminish the usefulness of Google Scholar, not only in identifying the most-cited articles of all time, but also in reflecting the impact of other document types (especially books), thus redefining the concept of academic impact. Keywords:Martín-Martín, A.; Ayllón, JM.; Delgado López-Cózar, E.; Orduña Malea, E. (2015). Nature s Top 100 Re-Revisited. Journal of the American Society for Information Science and Technology. 66(12):2714-2714. doi:10.1002/asi.23570271427146612Bornmann , L. Nature's top 100 revisited. Journal of the Association for Information Science and Technology http://www.lutz-bornmann.de/icons/top_100.pdfGarfield , E. 2005 The agony and the ecstasy-the history and meaning of the Journal Impact Factor http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdfMartin-Martin , A. Orduna-Malea , E. Ayllon , J.M. Delgado Lopez-Cozar , E. 2014 Does Google Scholar contain all highly cited documents (1950-2013)? http://arxiv.org/abs/1410.8464Van Noorden, R., Maher, B., & Nuzzo, R. (2014). The top 100 papers. Nature, 514(7524), 550-553. doi:10.1038/514550

    Coverage of highly-cited documents in Google Scholar, Web of Science, and Scopus: a multidisciplinary comparison

    Get PDF
    This study explores the extent to which bibliometric indicators based on counts of highly-cited documents could be affected by the choice of data source. The initial hypothesis is that databases that rely on journal selection criteria for their document coverage may not necessarily provide an accurate representation of highly-cited documents across all subject areas, while inclusive databases, which give each document the chance to stand on its own merits, might be better suited to identify highly-cited documents. To test this hypothesis, an analysis of 2,515 highly-cited documents published in 2006 that Google Scholar displays in its Classic Papers product is carried out at the level of broad subject categories, checking whether these documents are also covered in Web of Science and Scopus, and whether the citation counts offered by the different sources are similar. The results show that a large fraction of highly-cited documents in the Social Sciences and Humanities (8.6%-28.2%) are invisible to Web of Science and Scopus. In the Natural, Life, and Health Sciences the proportion of missing highly-cited documents in Web of Science and Scopus is much lower. Furthermore, in all areas, Spearman correlation coefficients between citation counts in Google Scholar and those in Web of Science and Scopus are remarkably strong (.83-.99). The main conclusion is that the data about highly-cited documents available in the inclusive database Google Scholar do indeed reveal significant coverage deficiencies in Web of Science and Scopus in several areas of research. Therefore, using these selective databases to compute bibliometric indicators based on counts of highly-cited documents might produce biased assessments in poorly covered areas.
    Alberto Martín-Martín enjoys a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educación, Cultura y Deporte (Spain).
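
    To make the comparison concrete, a minimal Python sketch of this kind of coverage and correlation check is shown below. The document records and citation counts are invented and scipy is an assumed dependency; the study itself works with 2,515 Classic Papers documents.

    from scipy.stats import spearmanr

    # Hypothetical records: Google Scholar citation counts plus the count in a
    # selective database (None when the document is not covered there).
    docs = [
        {"area": "Humanities", "gs": 310, "wos": None},
        {"area": "Humanities", "gs": 255, "wos": 180},
        {"area": "Humanities", "gs": 190, "wos": 120},
        {"area": "Life Sciences", "gs": 980, "wos": 870},
        {"area": "Life Sciences", "gs": 640, "wos": 555},
        {"area": "Life Sciences", "gs": 420, "wos": 300},
    ]

    for area in sorted({d["area"] for d in docs}):
        subset = [d for d in docs if d["area"] == area]
        covered = [d for d in subset if d["wos"] is not None]
        missing = len(subset) - len(covered)
        print(f"{area}: {missing / len(subset):.0%} of highly-cited documents missing")
        if len(covered) > 1:
            rho, _ = spearmanr([d["gs"] for d in covered], [d["wos"] for d in covered])
            print(f"  Spearman correlation over covered documents: {rho:.2f}")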

    Do ResearchGate Scores create ghost academic reputations?

    Get PDF
    The academic social network site ResearchGate (RG) has its own indicator, the RG Score, for its members. The high-profile nature of the site means that the RG Score may be used for recruitment, promotion and other tasks for which researchers are evaluated. In response, this study investigates whether it is reasonable to employ the RG Score as evidence of scholarly reputation. For this, three different author samples were investigated. An outlier sample includes 104 authors with high RG Score values. A Nobel sample comprises 73 Nobel winners from Medicine and Physiology, Chemistry, Physics and Economics (from 1975 to 2015). A longitudinal sample includes weekly data on 4 authors with different RG Scores. The results suggest that high RG Scores are built primarily from activity related to asking and answering questions on the site. In particular, it seems impossible to get a high RG Score solely through publications. Within RG it is possible to distinguish between (passive) academics who interact little on the site and active platform users, who can get high RG Scores by engaging with others on the site (questions, answers, social networks with influential researchers). Thus, RG Scores should not be mistaken for academic reputation indicators.
    Alberto Martín-Martín enjoys a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educación, Cultura y Deporte (Spain). Enrique Orduña-Malea holds a postdoctoral fellowship (PAID-10-14) from the Polytechnic University of Valencia (Spain).
    Orduña Malea, E.; Martín-Martín, A.; Thelwall, M.; Delgado López-Cózar, E. (2017). Do ResearchGate Scores create ghost academic reputations? Scientometrics, 112(1), 443-460. https://doi.org/10.1007/s11192-017-2396-9

    ResearchGate versus Google Scholar: Which finds more early citations?

    Get PDF
    ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to that of other citation indexes, using 2,675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate citations correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data than Google Scholar. Nevertheless, preprint sharing in ResearchGate is substantial enough for authors to take seriously.
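
    A minimal sketch of this kind of head-to-head comparison, with invented citation counts for a handful of articles, might look as follows; the study itself uses 2,675 articles and real counts from the four sources.

    from itertools import combinations

    # Hypothetical per-article citation counts from four citation indexes.
    articles = [
        {"GS": 9, "RG": 6, "WoS": 3, "Scopus": 4},
        {"GS": 4, "RG": 5, "WoS": 1, "Scopus": 2},
        {"GS": 12, "RG": 8, "WoS": 6, "Scopus": 7},
        {"GS": 2, "RG": 2, "WoS": 0, "Scopus": 1},
    ]

    for source_a, source_b in combinations(["GS", "RG", "WoS", "Scopus"], 2):
        wins_a = sum(1 for art in articles if art[source_a] > art[source_b])
        wins_b = sum(1 for art in articles if art[source_b] > art[source_a])
        print(f"{source_a} finds more citations than {source_b} for {wins_a} articles; "
              f"the reverse holds for {wins_b}")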

    Research misconduct in the fields of ethics and philosophy: researchers’ perceptions in Spain

    Get PDF
    This is the Author's Original Manuscript (AOM) (also called a "preprint") sent for review to Science and Engineering Ethics on 11/10/2020. The final version of the article was published online at SEE on 21/01/2021 and is available at: https://doi.org/10.1007/s11948-021-00278-w. Empirical studies have revealed a disturbing prevalence of research misconduct in a wide variety of disciplines, although not, to date, in the areas of ethics and philosophy. This study aims to provide empirical evidence on perceptions of how serious a problem research misconduct is in these two disciplines in Spain, particularly regarding the effects that the model used to evaluate academics' research performance may have on their ethical behaviour. The methodological triangulation applied in the study combines a questionnaire, a debate at the annual meeting of a scientific association, and in-depth interviews. Of the 541 questionnaires sent out, 201 responses were obtained (37.1% of the total sample), with a significant difference in the participation of researchers in philosophy (30.5%) and in ethics (52.8%); 26 researchers took part in the debate and 14 interviews were conducted. The questionnaire results reveal that 91.5% of the respondents considered research misconduct to be on the rise; 63.2% considered at least three of the fraudulent practices referred to in the study to be commonplace, and 84.1% identified two or more such practices. The researchers perceived a high prevalence of duplicate publication (66.5%), self-plagiarism (59.0%), use of personal influence (57.5%) and citation manipulation (44.0%), in contrast to a low perceived incidence of data falsification or fabrication (10.0%). The debate and the interviews corroborated these data. Researchers associated the spread of these forms of misconduct with the research evaluation model applied in Spain.

    Can Microsoft Academic be used for citation analysis of preprint archives? The case of the Social Science Research Network

    Get PDF
    This is an accepted manuscript of an article published by Springer in Scientometrics on 07/03/2018, available online at https://doi.org/10.1007/s11192-018-2704-z. The accepted version of the publication may differ from the final published version. Preprint archives play an important scholarly communication role within some fields. The impact of archives and of individual preprints is difficult to analyse because online repositories are not indexed by Web of Science or Scopus. In response, this article assesses whether the new Microsoft Academic can be used for citation analysis of preprint archives, focusing on the Social Science Research Network (SSRN). Although Microsoft Academic seems to index SSRN comprehensively, it groups only a small fraction of SSRN papers into an easily retrievable set, and the character of that set varies over time, making any field normalisation or citation comparison untrustworthy. A brief parallel analysis of arXiv suggests that similar results would occur for other online repositories. Systematic analyses of preprint archives are nevertheless possible with Microsoft Academic, given its promising coverage and citation results, when complete lists of archive publications are available from other sources.

    Using Google Scholar Institutional Level Data to Evaluate the Quality of University Research

    Get PDF
    In recent years, the extent of formal research evaluation, at all levels from the individual to the multiversity, has increased dramatically. At the institutional level, there are world university rankings based on ad hoc combinations of different indicators. There are also national exercises, such as those in the UK and Australia, that evaluate research outputs and environment through peer review panels; these are extremely costly and time consuming. This paper evaluates the possibility of using Google Scholar (GS) institutional-level data to evaluate university research in a relatively automatic way. Several citation-based metrics are collected from GS for all 130 UK universities. These are used to evaluate performance and produce university rankings, which are then compared with various rankings based on the 2014 UK Research Excellence Framework (REF). The rankings are shown to be credible and to avoid some of the obvious problems of the REF ranking, as well as being highly efficient and cost effective. We also investigate the possibility of normalizing the results for each university's subject mix, since science subjects generally produce significantly more citations than the social sciences or humanities.
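
    As an illustration of the subject-mix normalization mentioned in the last sentence, the sketch below divides a university's citations per paper in each subject by a baseline for that subject and averages the result over its output. All figures and baseline values are invented, and this is a generic field-normalization scheme, not necessarily the one adopted in the paper.

    # Hypothetical average citations per paper for each broad subject group.
    field_baseline = {"Science": 25.0, "Social Science": 8.0, "Humanities": 3.0}

    # One university's output as (subject, papers, total citations); made up.
    university_output = [
        ("Science", 400, 11000),
        ("Social Science", 150, 1500),
        ("Humanities", 50, 200),
    ]

    weighted_sum, total_papers = 0.0, 0
    for subject, papers, cites in university_output:
        normalised = (cites / papers) / field_baseline[subject]
        weighted_sum += normalised * papers
        total_papers += papers

    # A value above 1.0 means citation impact above the baseline for the
    # university's own mix of subjects.
    print(f"Field-normalised citation score: {weighted_sum / total_papers:.2f}")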